Camera settings for VidSync
Using your camera to the best of its ability is important for getting the most out of VidSync data. This means choosing the right settings, discussed here, as well as paying attention to the lighting, contrast, and other qualities of the scene.
Many people use VidSync with GoPro cameras, which do not require many of the fine-grained settings controls discussed below. However, there are still some important decisions to make about GoPro settings. See the GoPro section of the video cameras and mounts page for some recommendations, or this 2013 article on GoPro use for a still-relevant (as of 2015) summary of GoPro modes.
Focus settings
Autofocus systems should be disabled, because changing the focus alters the optical geometry of the system and invalidates the camera’s calibration.
Users of adjustable manual-focus cameras should experiment with their systems to determine the best focal distance, because the optics of housing ports and the air-water interface affect focus in difficult-to-predict ways, and the camera’s stated focal distance will be very far from the distance at which it is actually focused through the housing and water. In the case of dome ports, the entire image the camera is seeing is actually a "virtual image" projected from 0 to around 0.5 m (or less!) outside the port, even if the real subjects are several meters away, and lenses must focus in close to capture this image.
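To see roughly where that virtual image sits, the standard single-surface refraction equation gives a usable approximation. The sketch below is a simplification that ignores the thickness of the port glass, and the 10 cm dome radius is a made-up example rather than a recommendation; it just illustrates why the virtual image lands so close to the port even for distant subjects.

```python
# Rough sketch of why dome ports force close focusing: treat the port as a single
# thin refracting surface between water (n1 = 1.33) and air (n2 = 1.00) and apply
# the spherical-surface refraction equation  n1/s_o + n2/s_i = (n2 - n1)/R.
# A negative image distance s_i means a virtual image on the water side of the port.
# The 10 cm dome radius below is a hypothetical example, not a recommendation.

N_WATER, N_AIR = 1.33, 1.00

def virtual_image_distance_m(subject_distance_m, dome_radius_m):
    """Signed image distance from the dome surface; negative = virtual image in the water."""
    rhs = (N_AIR - N_WATER) / dome_radius_m - N_WATER / subject_distance_m
    return N_AIR / rhs

dome_radius = 0.10  # m (hypothetical)
for subject in (0.5, 1.0, 3.0, float("inf")):
    image = virtual_image_distance_m(subject, dome_radius)
    print(f"subject {subject:g} m away -> virtual image ~{abs(image):.2f} m outside the port")
```

For this hypothetical dome, the virtual image stays within about 0.2 to 0.3 m of the port no matter how far away the real subject is, which is why the lens has to be focused very close.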
Make sure your focus settings aren't accidentally changed during handling of the cameras between calibration and measurement. For example, I've found that the focus rings on DSLR lenses can sometimes slip during this handling, and it's helpful to secure them with a ring of electrical tape.
Focal length ("zoom") settings
The decision of which focal length to use (which is the same as deciding which zoom setting to use on some consumer cameras, or choosing between wide/medium/narrow on GoPros) obviously depends on the field of view you require, but there are some less intuitive tradeoffs worth considering.
Generally, you can get the same field of view by being up close to the subject with a wide view, or farther away with a narrow view. Sometimes the environment dictates which of these is more practical: you may be filming in tight spaces and unable to place the cameras far from your subjects, or your subjects may not let you get very close.
Other times, the choice comes down to optical considerations. A disadvantage of greater distance, especially for underwater work, is that putting more water between your cameras and your subject can greatly reduce the apparent contrast of the subject and put a lot of light-scattering debris in the way. There is an advantage, however, if you're trying to observe subjects at varying distances from the camera. Consider, for example, observing two fish, one of which is 1 m behind the other. If you place the camera 0.2 m from the first fish with a wide-angle view, you'll get a large and detailed view of the nearby fish and a relatively tiny view of the other fish 1.2 m away (500% farther). But if you place the cameras 3 m away from the first fish and zoom in, you'll have roughly the same view of the near fish 3 m away but a much better view of the other fish 4 m away (only 33% farther). For most underwater field work, it's ideal to use a wide view and get up close to the subject. However, for lab experiments where you're viewing fish through the side of a stream tank, it might be best to back the cameras off and zoom in. It's hard to know in advance what setup will be best for your study; just be mindful of the tradeoffs involved, and test configurations before committing to them.
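The geometry behind that example is easy to check: to a first approximation (ignoring refraction and lens distortion), a subject's apparent size scales with the inverse of its distance from the camera. A quick sketch using the example numbers above:

```python
# Apparent-size comparison for two fish 1 m apart under the two camera placements
# described above. First-order approximation: apparent size scales as 1/distance
# (refraction at the housing and lens distortion are ignored).

def relative_apparent_size(near_distance_m, far_distance_m):
    """How large the far fish appears relative to the near fish (1.0 = same size)."""
    return near_distance_m / far_distance_m

# Wide-angle view, camera 0.2 m from the near fish:
print(relative_apparent_size(0.2, 1.2))  # ~0.17: the far fish appears about 1/6 the size

# Zoomed in, camera 3 m from the near fish:
print(relative_apparent_size(3.0, 4.0))  # 0.75: the far fish appears 3/4 the size
```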
As with focus settings, zoom mechanisms can sometimes slip during handling, and it can be useful to put electrical tape around a lens's zoom ring or otherwise secure whatever kind of zoom mechanism you're using.
Exposure settings
Most video cameras do a good job handling exposure automatically. If you're using more advanced equipment and require custom control, these are the basics:
- Aperture (or "f-stop") is the size of the opening inside the lens through which light passes. Using a narrow aperture (high f-stop) allows less light to reach the sensor, but it increases your depth of field (the distance from near to far that appears sharply in focus). Because behavioral studies underwater often require being up close to the subjects and keeping them in focus at varying distances, a large depth of field with an f-stop like f/8 or f/11 is often ideal.
- Shutter speed is how long the sensor is exposed to light for each frame. Underwater video with default settings very often uses a relatively slow shutter speed that produces some motion blurring in the individual paused frames we analyze in VidSync. It can therefore be beneficial, especially when motion is the subject of the study (e.g., studying swimming maneuvers), to use a fast shutter speed (like 1/250 or even 1/500 s) to freeze motion and provide crisp images of moving subjects in every frame.
- ISO is the sensitivity of your sensor. A high ISO introduces noise, or graininess, to the image, but good cameras can go pretty high before this becomes unacceptable from a data-quality standpoint. A slightly grainy, high-ISO video might be slightly less beautiful, but it can allow you to simultaneously achieve a large depth of field (via a narrow aperture, i.e., a high f-stop) and eliminate motion blur (via a fast shutter speed). It's good not to get too grainy, but this is probably the element of exposure in which a camera's autoexposure is most likely to err on the side of aesthetics for everyday users instead of maximizing the quality of still frames for scientific analysis.
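For those who want to see how these three settings trade off numerically, relative image brightness is proportional to (shutter time × ISO) / f-number². The snippet below is only an illustrative sketch of that relationship; the specific settings are hypothetical, not recommendations for any particular camera.

```python
# Relative exposure ~ (shutter_time * ISO) / f_number^2.
# Two combinations with the same relative exposure yield a similarly bright image,
# so a narrower aperture and faster shutter can be offset by raising ISO (at the
# cost of grain). All numbers here are hypothetical.

def relative_exposure(f_number, shutter_s, iso):
    return shutter_s * iso / f_number ** 2

# A hypothetical baseline: wide aperture, slow-ish shutter, low ISO.
baseline = relative_exposure(f_number=2.8, shutter_s=1/60, iso=100)

# Keep the same brightness at f/8 (deep depth of field) and 1/250 s (frozen motion):
iso_needed = baseline * 8**2 / (1/250)
print(f"ISO needed at f/8 and 1/250 s: {iso_needed:.0f}")  # roughly ISO 3400
```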
Again, autoexposed video works fine for most purposes. Just be aware of the reasons you might want more detailed control.
Video resolution, scan mode, and framerate
Labels like "1080p30" and "1080i60" describe the resolution and scan mode of the video. The first number is the vertical resolution; the "p" stands for progressive scan and the "i" for interlaced; the last number is the number of frames per second (for interlaced video, it usually counts fields, i.e., half-frames, per second).
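If it helps to see that convention spelled out, here is a toy parser for such labels (the label strings are just examples):

```python
import re

# Toy parser for labels like "1080p30" or "1080i60": vertical resolution,
# scan mode ("p" = progressive, "i" = interlaced), and frames (or fields) per second.
def parse_video_label(label):
    match = re.fullmatch(r"(\d+)([pi])(\d+(?:\.\d+)?)", label)
    if match is None:
        raise ValueError(f"Unrecognized label: {label}")
    height, scan, rate = match.groups()
    return {
        "vertical_resolution": int(height),
        "scan_mode": "progressive" if scan == "p" else "interlaced",
        "rate_per_second": float(rate),
    }

print(parse_video_label("1080p30"))
print(parse_video_label("1080i60"))
```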
Interlaced video is becoming less common, and it's terrible for VidSync or any other scientific analysis. When it's paused, any moving object appears as two superimposed images, with alternating horizontal lines belonging to two fields captured at slightly different consecutive times. Video transcoding software such as Apple Compressor can convert interlaced videos into passable progressive-scan videos by interpolating data, but it is better and much easier to record in progressive scan in the first place.
The most common framerate of 30 frames per second (fps) is adequate for most VidSync work. A higher framerate will allow more precise synchronization (to within the nearest 1/120 s for 60 fps, as opposed to the nearest 1/60 s for 30 fps), but using a higher framerate requires more disk space, lower resolution, or both. It is probably only necessary for very specialized applications trying to resolve fine temporal detail of fast-moving subjects.
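The synchronization figures above are simply half of one frame interval; here's the arithmetic for a few common frame rates:

```python
# Per the paragraph above, synchronization precision is limited by the frame
# interval: two cameras' frames can be misaligned in time by up to half a frame,
# i.e., 1/(2 * fps).

for fps in (24, 30, 60, 120):
    frame_interval_ms = 1000 / fps
    sync_precision_ms = frame_interval_ms / 2
    print(f"{fps:>3} fps: frames every {frame_interval_ms:.1f} ms, "
          f"synchronized to within ~{sync_precision_ms:.2f} ms (1/{2 * fps} s)")
```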
High resolution increases visual detail, at the expense of the storage space and processing power required to play multiple files. The most common resolution today is "full HD," 1920 x 1080 pixels. There is little reason to use anything less, unless you're stuck with very old computers (low-end, pre-2010 models) or very little disk space for storage, or you really don't need the detail (e.g., if you're only doing length measurements of easily visible fish near the cameras).
High-end new cameras (as of 2015) may offer 4K video footage, which has roughly double the horizontal and vertical resolution of 1080p footage. (However, in some lower-end consumer models, 4K may be a bit of a gimmick that doesn't really offer improved quality, so read reviews carefully.) I haven't yet experimented with this currently expensive technology, but it is promising for fine-detail applications. VidSync has not been tested with 4K to my knowledge, but there's no reason it shouldn't work, as long as the computer is powerful enough to play the synced video clips at the same time.
File formats
Preserving the finest details in video footage requires attention not only to how the video was filmed, but also to how it is saved and encoded digitally. Files will play in VidSync as long as they're in any format compatible with QuickTime Player on the same computer (.mov or .mp4 files using the H.264 codec are a good choice), but quality and file size depend on compression settings. Some cameras record to unusual, compressed formats like AVCHD, or in formats with little to no compression such as Apple ProRes. In both cases, conversion of the files is required to make the videos playable on a computer, bring the files down to a reasonable size, or both.

This process will vary among users, but we describe our own steps here as an example. Our Sony® HDR-SR12 cameras recorded 1080i video on internal hard drives in AVCHD format, and we used the "Log and Transfer" function of Apple Final Cut Pro® 6 to import videos as QuickTime .mov files encoded with the Apple Intermediate Codec. Files using this low-compression codec took too much space (~120 GB per camera for 2 hours), and the files from our cameras were interlaced, so we used Apple Compressor 3 to create the final deinterlaced .mov files in the H.264 codec at a 4 MB/s bitrate (about 30 GB per camera for 2 hours of footage). Bitrate controls the tradeoff between image quality and file size; we chose 4 MB/s after determining by trial and error that it was the smallest value that preserved the very fine detail we required. We preserved the original AVCHD files as disk images (.dmg files) of the camera hard drive contents, and we recommend such preservation of the raw data to all users, so footage can be re-imported later using different settings (e.g., a higher bitrate for more detail) if needed.
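As a rough way to budget disk space for your own footage (and a sanity check on the figures above), file size is just bitrate multiplied by duration. The sketch below assumes bitrates in megabytes per second, which is what makes the ~30 GB per 2 hours figure work out; the Apple Intermediate Codec rate shown is back-calculated from the ~120 GB figure rather than quoted from any specification.

```python
# Estimate video file size from bitrate and recording duration.
# Assumes bitrates in megabytes per second (MB/s); the ~17 MB/s value for the
# Apple Intermediate Codec is back-calculated from the ~120 GB / 2 h figure above.

def file_size_gb(bitrate_mb_per_s, duration_hours):
    return bitrate_mb_per_s * duration_hours * 3600 / 1000  # MB -> GB (decimal)

print(f"H.264 at 4 MB/s for 2 h:  ~{file_size_gb(4, 2):.0f} GB")   # ~29 GB
print(f"AIC at ~17 MB/s for 2 h: ~{file_size_gb(17, 2):.0f} GB")   # ~122 GB
```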